171 research outputs found
The effectiveness of liquid-phase fire-extinguishing agents at power engineering facilities
This article reviews existing fire-extinguishing agents, with particular attention to the use of liquid-phase agents at power engineering facilities. A comparative analysis of these agents is given, and the most effective one is identified according to the relevant evaluation criteria for such substances.
Modification of the Canny algorithm for processing radiographic images
A modification of the Canny edge-detection algorithm for fast extraction and localization of an object in a radiographic image is considered; the possibility of automated parameter tuning based on normalization of the image histogram is established.
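The general idea of deriving Canny-style thresholds automatically from a normalized histogram can be sketched as follows. This is not the paper's actual modification: it is a minimal illustration, assuming min-max intensity normalization, median-based hysteresis thresholds (a common heuristic), and a plain gradient magnitude standing in for the full Canny pipeline.

```python
import numpy as np

def normalize_histogram(img):
    """Stretch intensities to the full [0, 255] range (simple min-max normalization)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return (img - lo) / (hi - lo) * 255.0

def auto_thresholds(img, sigma=0.33):
    """Derive low/high hysteresis thresholds from the image median (common heuristic)."""
    m = np.median(img)
    low = max(0.0, (1.0 - sigma) * m)
    high = min(255.0, (1.0 + sigma) * m)
    return low, high

def gradient_magnitude(img):
    """Central-difference gradient magnitude, a stand-in for the Canny filter stage."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

# Toy "radiograph": a slowly varying background ramp with a bright square object.
img = np.tile(np.linspace(50.0, 120.0, 64), (64, 1))
img[20:44, 20:44] = 220.0

norm = normalize_histogram(img)
low, high = auto_thresholds(norm)
edges = gradient_magnitude(norm) >= high   # keep only strong edges, for brevity
```

After normalization the thresholds adapt to the image automatically: the square's boundary exceeds `high`, while the gentle background ramp stays well below it.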
Efficient ASIC Architectures for Low Latency Niederreiter Decryption
Post-quantum cryptography addresses the increasing threat that quantum computing poses to modern communication systems. Among the available quantum-resistant systems, the Niederreiter cryptosystem is positioned as a conservative choice with strong security guarantees. As a code-based cryptosystem, the Niederreiter system enables high-performance operations and is thus ideally suited for applications such as the acceleration of server workloads. However, until now, no ASIC architecture has been available for low-latency computation of Niederreiter operations. The present work therefore targets the design, implementation, and optimization of tailored architectures for low-latency Niederreiter decryption. Two architectures utilizing different decoding algorithms are proposed and implemented using a 22 nm FDSOI CMOS technology node. One of these optimized architectures improves decryption latency by 27% compared to a state-of-the-art reference while requiring only 25% of the area.
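At its core, Niederreiter decryption is syndrome decoding: the ciphertext is the syndrome s = H·e (mod 2) of a secret low-weight error vector e, and the decoder recovers e. The paper's hardware decoders operate on binary Goppa codes; purely as a toy illustration of the principle (not the paper's design), here is syndrome decoding of a single-bit error with a Hamming(7,4) parity-check matrix:

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column i is the 3-bit
# binary representation of i+1, so the syndrome of a weight-1 error
# directly identifies the error position.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T  # 3x7

def syndrome(e):
    """'Encrypt' an error vector: s = H e (mod 2), as in Niederreiter."""
    return H @ e % 2

def decode_single_error(s):
    """Recover the weight-1 error vector whose syndrome is s."""
    e = np.zeros(7, dtype=int)
    for i in range(7):
        if np.array_equal(H[:, i], s):
            e[i] = 1
            return e
    return e  # zero syndrome -> no error

e = np.zeros(7, dtype=int); e[4] = 1   # secret low-weight error vector
s = syndrome(e)                        # the "ciphertext"
assert np.array_equal(decode_single_error(s), e)
```

Real Niederreiter instances replace this column lookup with algebraic decoding of a Goppa code (e.g. the Berlekamp-Massey or Patterson algorithm), which is exactly the step the two proposed ASIC architectures accelerate with different decoding algorithms.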
neuroAIx-Framework: design of future neuroscience simulation systems exhibiting execution of the cortical microcircuit model 20× faster than biological real-time
Introduction: Research in the field of computational neuroscience relies on highly capable simulation platforms. With real-time capabilities surpassed for established models like the cortical microcircuit, it is time to conceive next-generation systems: neuroscience simulators providing significant acceleration, even for larger networks with natural density, biologically plausible multi-compartment models, and the modeling of long-term and structural plasticity.
Methods: Stressing the need for agility to adapt to new concepts or findings in the domain of neuroscience, we have developed the neuroAIx-Framework, consisting of an empirical modeling tool, a virtual prototype, and a cluster of FPGA boards. This framework is designed to support and accelerate the continuous development of such platforms driven by new insights in neuroscience.
Results: Based on design space explorations using this framework, we devised and realized an FPGA cluster consisting of 35 NetFPGA SUME boards.
Discussion: This system functions as an evaluation platform for our framework. At the same time, it resulted in a fully deterministic neuroscience simulation system surpassing the state of the art in both performance and energy efficiency. It is capable of simulating the microcircuit with 20× acceleration compared to biological real-time and achieves an energy efficiency of 48 nJ per synaptic event.
Classification of Resilience Techniques Against Functional Errors at Higher Abstraction Layers of Digital Systems
Nanoscale technology nodes bring reliability concerns back to the center stage of digital system design. A systematic classification of approaches that increase system resilience in the presence of functional hardware (HW)-induced errors is presented, dealing with higher system abstractions, such as the (micro)architecture, the mapping, and platform software (SW). The field is surveyed in a systematic way based on nonoverlapping categories, which add insight into the ongoing work by exposing similarities and differences. HW and SW solutions are discussed in a similar fashion so that interrelationships become apparent. The presented categories are illustrated with representative examples from the literature. Moreover, it is demonstrated how hybrid schemes can be decomposed into their primitive components.
Resolving the Memory Bottleneck for Single Supply Near-Threshold Computing
This paper reviews state-of-the-art memory designs and new design methods for near-threshold computing (NTC). In particular, it presents new ways to design reliable low-voltage NTC memories cost-effectively by reusing available cell libraries, or by adding a digital wrapper around existing commercially available memories. The approach is based on modeling at the system level, supported by silicon measurements on a test chip in a 40 nm low-power processing technology. Advanced monitoring, control, and run-time error mitigation schemes enable the operation of these memories at the same optimal near-Vt voltage level as the digital logic. Reliability degradation is thus overcome, opening the way to solving the memory bottleneck in NTC systems. Starting from the available 40 nm silicon measurements, the analysis is extended to future 14 and 10 nm technology nodes.
Measurement of the cosmic ray spectrum above eV using inclined events detected with the Pierre Auger Observatory
A measurement of the cosmic-ray spectrum for energies exceeding eV is presented, based on the analysis of showers with zenith angles greater than detected with the Pierre Auger Observatory between 1 January 2004 and 31 December 2013. The measured spectrum confirms a flux suppression at the highest energies. Above eV, the "ankle", the flux can be described by a power law with index followed by a smooth suppression region. For the energy at which the spectral flux has fallen to one-half of its extrapolated value in the absence of suppression, we find eV.
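The spectral shape described above, a power law above the ankle with a smooth suppression at the highest energies, is commonly parametrized as follows. The symbols here are generic placeholders for illustration, not the paper's fitted values:

```latex
% Flux above the ankle: power law of spectral index \gamma,
% smoothly suppressed above a characteristic energy E_s.
J(E) \propto E^{-\gamma}\left[1 + \left(\frac{E}{E_s}\right)^{\Delta\gamma}\right]^{-1}
% The half-flux energy E_{1/2} is then defined by the condition that
% the measured flux equals half of the unsuppressed extrapolation:
% J(E_{1/2}) = \tfrac{1}{2}\, J_{\mathrm{power\,law}}(E_{1/2})
```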